Lectures 12 and 13 - Complexity Penalized Maximum Likelihood Estimation

Author

  • Rui Castro
Abstract

As you learned in previous courses, if we have a statistical model we can often estimate unknown “parameters” by the maximum likelihood principle. Suppose we have independent, but not necessarily identically distributed, data. Namely, we model the data $\{Y_i\}_{i=1}^n$ as independent random variables with densities (with respect to a common dominating measure) given by $p_i(\cdot;\theta)$, where $\theta \in \Theta$ is an unknown “parameter”. The Maximum Likelihood Estimator (MLE) of $\theta$ is simply given by

$$\hat{\theta}_n = \arg\max_{\theta \in \Theta} \prod_{i=1}^n p_i(Y_i;\theta) = \arg\max_{\theta \in \Theta} \sum_{i=1}^n \log p_i(Y_i;\theta).$$
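To make the definition concrete, here is a minimal numerical sketch (not from the lecture notes themselves): we assume independent Gaussian observations $Y_i \sim N(\theta, \sigma_i^2)$ with known but non-identical variances, and maximize the log-likelihood with SciPy. The model choice, variable names, and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Illustrative model (an assumption, not from the lectures): independent
# Y_i ~ N(theta, sigma_i^2) with known but non-identical variances sigma_i.
rng = np.random.default_rng(0)
n = 200
sigmas = rng.uniform(0.5, 2.0, size=n)   # heteroscedastic noise levels
theta_true = 1.3
y = theta_true + sigmas * rng.standard_normal(n)

def neg_log_likelihood(theta):
    # -sum_i log p_i(Y_i; theta); each observation has its own density p_i.
    return -np.sum(norm.logpdf(y, loc=theta, scale=sigmas))

res = minimize_scalar(neg_log_likelihood)
print(f"numerical MLE: {res.x:.4f}")

# For this Gaussian model the MLE has a closed form (precision-weighted mean),
# which we can use to sanity-check the numerical optimizer.
w = 1.0 / sigmas**2
print(f"closed form:   {np.sum(w * y) / np.sum(w):.4f}")
```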


Similar Resources

Penalized Bregman Divergence Estimation via Coordinate Descent

Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. Recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear regression and penalized logistic regression, and was shown to be computationally superior. This paper explores...
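The penalized linear regression case mentioned in the abstract admits a compact illustration. The sketch below is a generic cyclic coordinate descent for the lasso, in the spirit of Friedman et al. (2007); it is our own minimal version, not the paper's Bregman-divergence algorithm, and the data and parameter choices are illustrative assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X**2).sum(axis=0) / n        # per-coordinate curvature
    r = y - X @ b                          # running residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]            # remove coordinate j's contribution
            rho = X[:, j] @ r / n          # correlation with partial residual
            # Soft-thresholding is the exact coordinate-wise minimizer:
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]            # restore with updated coefficient
    return b

# Usage on synthetic sparse-regression data:
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
beta = np.zeros(10)
beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + 0.1 * rng.standard_normal(200)
print(np.round(lasso_cd(X, y, lam=0.05), 3))
```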

Penalized likelihood phylogenetic inference: bridging the parsimony-likelihood gap.

The increasing diversity and heterogeneity of molecular data for phylogeny estimation has led to the development of complex models and model-based estimators. Here, we propose a penalized likelihood (PL) framework in which the levels of complexity in the underlying model can be smoothly controlled. We demonstrate the PL framework for a four-taxon tree case and investigate its properties. The PL framework...

FlipFlop: Fast Lasso-based Isoform Prediction as a Flow Problem

FlipFlop implements a fast method for de novo transcript discovery and abundance estimation from RNA-Seq data. It differs from Cufflinks by simultaneously performing the transcript and quantitation tasks using a penalized maximum likelihood approach, which leads to improved precision/recall. Other software packages taking this approach have exponential complexity in the number of exons in the gene. ...

Improving Gaussian Mixture Density Estimates: Averaging

We apply the idea of averaging ensembles of estimators to probability density estimation. In particular, we use Gaussian mixture models, which are important components in many neural network applications. One variant of averaging is Breiman's "bagging", which recently produced impressive results in classification tasks. We investigate the performance of averaging using three data sets. For comparison...
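The averaging idea is easy to sketch. Below is a minimal, hypothetical illustration using scikit-learn's GaussianMixture: fit a mixture on several bootstrap resamples (Breiman's bagging) and average the resulting densities on a grid. The data, number of components, and ensemble size are assumptions made purely for illustration, not the paper's experimental setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Bagged density estimation: average Gaussian-mixture densities fitted
# on bootstrap resamples of the data. All settings are illustrative.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2, 0.5, 300),
                       rng.normal(1, 1.0, 700)]).reshape(-1, 1)

n_models = 20
grid = np.linspace(-5, 5, 200).reshape(-1, 1)
density = np.zeros(len(grid))
for _ in range(n_models):
    boot = data[rng.integers(0, len(data), len(data))]   # bootstrap resample
    gmm = GaussianMixture(n_components=2, random_state=0).fit(boot)
    density += np.exp(gmm.score_samples(grid))           # per-model density
density /= n_models                                      # ensemble average

# Sanity check: the averaged density should integrate to roughly 1.
dx = grid[1, 0] - grid[0, 0]
print(f"density integrates to ~{density.sum() * dx:.3f}")
```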

A solution to the problem of separation in logistic regression.

The phenomenon of separation or monotone likelihood is observed in the fitting process of a logistic model if the likelihood converges while at least one parameter estimate diverges to ±∞. Separation primarily occurs in small samples with several unbalanced and highly predictive risk factors. A procedure by Firth, originally developed to reduce the bias of maximum likelihood estimates, ...
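Firth's procedure penalizes the likelihood with the Jeffreys prior, which keeps estimates finite even under complete separation. The sketch below is our own minimal implementation of the standard modified-score Newton iteration, not the paper's code; the toy data are constructed so that ordinary maximum likelihood would diverge.

```python
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    """Firth-penalized logistic regression (Jeffreys-prior penalty) via
    modified-score Newton iterations. Minimal sketch, not production code."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))            # fitted probabilities
        W = mu * (1.0 - mu)                        # IRLS weights
        XtWX = X.T @ (W[:, None] * X)              # Fisher information
        # Diagonal of the hat matrix H = W^(1/2) X (X'WX)^(-1) X' W^(1/2):
        h = np.einsum('ij,ij->i', X @ np.linalg.inv(XtWX), X) * W
        # Firth modification replaces (y - mu) with (y - mu + h*(1/2 - mu)):
        score = X.T @ (y - mu + h * (0.5 - mu))
        step = np.linalg.solve(XtWX, score)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Complete separation: x perfectly predicts y, so the ordinary MLE diverges,
# but the Firth estimate stays finite.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
X = np.column_stack([np.ones(6), x])
y = (x > 0).astype(float)
print(firth_logistic(X, y))
```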



Publication date: 2013